
    Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems

    Approximate Bayesian computation methods can be used to evaluate posterior distributions without having to calculate likelihoods. In this paper we discuss and apply an approximate Bayesian computation (ABC) method based on sequential Monte Carlo (SMC) to estimate parameters of dynamical models. We show that ABC SMC gives information about the inferability of parameters and model sensitivity to changes in parameters, and tends to perform better than other ABC approaches. The algorithm is applied to several well-known biological systems, for which parameters and their credible intervals are inferred. Moreover, we develop ABC SMC as a tool for model selection; given a range of different mathematical descriptions, ABC SMC is able to choose the best model using the standard Bayesian model selection apparatus. Comment: 26 pages, 9 figures
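    The accept/reject step that underlies all ABC variants can be sketched in a few lines. The toy example below (my own illustration using plain rejection ABC on an assumed exponential-decay system, not the paper's ABC SMC algorithm, which additionally refines a sequence of tolerances with sequential Monte Carlo) infers a decay rate without ever evaluating a likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dynamical system": exponential decay x(t) = x0 * exp(-k t).
# The true parameter below is only used to generate synthetic data.
k_true, x0 = 0.5, 10.0
times = np.linspace(0.0, 4.0, 9)
data = x0 * np.exp(-k_true * times) + rng.normal(0.0, 0.1, times.size)

def simulate(k):
    return x0 * np.exp(-k * times)

def distance(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

# Rejection ABC: draw k from the prior, keep draws whose simulated
# trajectory lies within tolerance eps of the observed data.
prior_draws = rng.uniform(0.0, 2.0, 20000)   # flat prior on k
eps = 0.2
accepted = np.array([k for k in prior_draws
                     if distance(simulate(k), data) < eps])

posterior_mean = accepted.mean()   # point summary of the ABC posterior
```

The accepted draws approximate the posterior; ABC SMC improves on this by shrinking `eps` over a sequence of weighted particle populations instead of using one fixed tolerance.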

    Present and future evidence for evolving dark energy

    We compute the Bayesian evidences for one- and two-parameter models of evolving dark energy, and compare them to the evidence for a cosmological constant, using current data from Type Ia supernovae, baryon acoustic oscillations, and the cosmic microwave background. We use only distance information, ignoring dark energy perturbations. We find that, under various priors on the dark energy parameters, LambdaCDM is currently favoured as compared to the dark energy models. We consider the parameter constraints that arise under Bayesian model averaging, and discuss the implication of our results for future dark energy projects seeking to detect dark energy evolution. The model selection approach complements and extends the figure-of-merit approach of the Dark Energy Task Force in assessing future experiments, and suggests a significantly modified interpretation of that statistic. Comment: 10 pages, RevTeX4, 3 figures included. Minor changes to match version accepted by PR
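    The Occam penalty at the heart of evidence comparison is easy to show on a toy problem. The sketch below (an assumed normal-mean setup, not the paper's cosmological likelihoods) compares a zero-parameter model against a one-parameter model with a flat prior; because the data are consistent with the simpler model, the extra parameter is penalised:

```python
import numpy as np

# Deterministic synthetic data with sample mean exactly zero,
# i.e. consistent with the simpler model below.
y = np.linspace(-1.0, 1.0, 50)

def log_like(mu):
    # Normal likelihood with known unit variance
    return -0.5 * np.sum((y - mu) ** 2) - 0.5 * y.size * np.log(2 * np.pi)

# Model 0: mu fixed at 0 (no free parameters) -> evidence = likelihood.
log_Z0 = log_like(0.0)

# Model 1: mu free, flat prior on [-5, 5]; evidence by grid integration.
grid = np.linspace(-5.0, 5.0, 2001)
dx = grid[1] - grid[0]
log_L = np.array([log_like(m) for m in grid])
m = log_L.max()
Z1_scaled = np.sum(np.exp(log_L - m)) * dx / 10.0   # prior density = 1/10
log_Z1 = m + np.log(Z1_scaled)

# Positive values favour the simpler model: the Occam factor at work.
log_bayes_factor = log_Z0 - log_Z1
```

Here the one-parameter model fits no better at its best-fit point, so the evidence ratio reduces to the ratio of posterior to prior volume for `mu`.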

    Bayesian Multimodel Inference for Geostatistical Regression Models

    The problem of simultaneous covariate selection and parameter inference for spatial regression models is considered. Previous research has shown that failure to take spatial correlation into account can influence the outcome of standard model selection methods. A Markov chain Monte Carlo (MCMC) method is investigated for the calculation of parameter estimates and posterior model probabilities for spatial regression models. The method can accommodate normal and non-normal response data and a large number of covariates. Thus the method is very flexible and can be used to fit spatial linear models, spatial linear mixed models, and spatial generalized linear mixed models (GLMMs). The Bayesian MCMC method also allows a priori unequal weighting of covariates, which is not possible with many model selection methods such as Akaike's information criterion (AIC). The proposed method is demonstrated on two data sets. The first is the whiptail lizard data set, which has been previously analyzed by other researchers investigating model selection methods. Our results confirmed the previous analysis suggesting that sandy soil and ant abundance were strongly associated with lizard abundance. The second data set concerned pollution-tolerant fish abundance in relation to several environmental factors. Results indicate that abundance is positively related to Strahler stream order and a habitat quality index. Abundance is negatively related to percent watershed disturbance.
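    Posterior model probabilities over covariate subsets can be illustrated without the full MCMC machinery. The sketch below (my illustration, not the paper's method: it enumerates all subsets of three assumed candidate covariates and approximates each model's evidence with BIC rather than sampling model space by MCMC) also shows where a priori unequal model weights would enter:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 3))                          # candidate covariates
y = 1.0 + 2.0 * X[:, 0] + rng.normal(0.0, 1.0, n)    # only covariate 0 matters

def bic(cols):
    """Gaussian linear model BIC for intercept + chosen columns."""
    Z = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    k = Z.shape[1] + 1                               # coefficients + noise variance
    return n * np.log(rss / n) + k * np.log(n)

# All 8 subsets of the 3 covariates, empty model included.
models = [cols for r in range(4) for cols in combinations(range(3), r)]

# Prior model weights; uniform here, but this is exactly where unequal
# a priori weighting of covariates/models would be expressed.
prior = np.ones(len(models)) / len(models)

log_ev = np.array([-0.5 * bic(mdl) for mdl in models])   # BIC evidence approx.
w = np.exp(log_ev - log_ev.max()) * prior
post = w / w.sum()                                        # posterior model probs
best = models[int(np.argmax(post))]
```

With more than a handful of covariates, enumeration becomes infeasible and MCMC over model space (as in the paper) is the practical route.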

    Deviance Information Criterion for Comparing Stochastic Volatility Models

    Bayesian methods have been efficient in estimating parameters of stochastic volatility models for analyzing financial time series. Recent advances made it possible to fit stochastic volatility models of increasing complexity, including covariates, leverage effects, jump components and heavy-tailed distributions. However, a formal model comparison via Bayes factors remains difficult. The main objective of this paper is to demonstrate that model selection is more easily performed using the deviance information criterion (DIC). It combines a Bayesian measure-of-fit with a measure of model complexity. We illustrate the performance of DIC in discriminating between various different stochastic volatility models using simulated data and daily returns data on the S&P100 index.
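    DIC is a short computation once posterior draws are available: DIC = D_bar + p_D, where D_bar is the posterior mean deviance and p_D = D_bar - D(theta_bar) is the effective number of parameters. A minimal sketch for a normal-mean model (my toy example, chosen so the posterior is available in closed form; not a stochastic volatility model):

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(1.0, 1.0, 100)        # data; known unit variance, unknown mean
n, ybar = y.size, y.mean()

# Exact posterior for the mean under a flat prior: N(ybar, 1/n).
# In a real SV model these draws would come from an MCMC sampler.
mu_draws = rng.normal(ybar, 1.0 / np.sqrt(n), 5000)

def deviance(mu):
    # -2 * log-likelihood of the normal model with sd 1
    return np.sum((y - mu) ** 2) + n * np.log(2 * np.pi)

D_bar = np.mean([deviance(m) for m in mu_draws])  # posterior mean deviance
D_hat = deviance(mu_draws.mean())                 # deviance at posterior mean
p_D = D_bar - D_hat                               # effective no. of parameters
DIC = D_bar + p_D                                 # lower is better
```

For this one-parameter model `p_D` comes out close to 1, which is the sanity check that makes DIC's complexity measure interpretable.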

    Anticipating the prevalence of avian influenza subtypes H9 and H5 in live-bird markets.

    An ability to forecast the prevalence of specific subtypes of avian influenza viruses (AIV) in live-bird markets would facilitate greatly the implementation of preventative measures designed to minimize poultry losses and human exposure. The minimum requirement for developing predictive quantitative tools is surveillance data of AIV prevalence sampled frequently over several years. Recently, a 4-year time series of monthly sampling of hemagglutinin subtypes 1–13 in ducks, chickens and quail in live-bird markets in southern China has become available. We used these data to investigate whether a simple statistical model, based solely on historical data (variables such as the number of positive samples of subtype Y in host X, t months earlier), could accurately predict prevalence of H5 and H9 subtypes in chickens. We also examined the role of ducks and quail in predicting prevalence in chickens within the market setting, because between-species transmission is thought to occur within markets but has not been measured. Our best statistical models performed remarkably well at predicting future prevalence (pseudo-R2 = 0.57 for H9 and 0.49 for H5), especially considering the multi-host, multi-subtype nature of AIVs. We did not find prevalence of H5/H9 in ducks or quail to be predictors of prevalence in chickens within the Chinese markets. Our results suggest surveillance protocols that could enable more accurate and timely predictive statistical models. We also discuss which data should be collected to allow the development of mechanistic models.
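    A "historical data only" predictor of this kind can be sketched as a lagged regression. The example below (a synthetic autocorrelated series, not the Chinese market data, and ordinary least squares rather than the paper's statistical models) regresses prevalence on its own value one month earlier and reports the resulting R²:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic monthly prevalence series with month-to-month persistence,
# a stand-in for surveillance counts; purely illustrative.
T = 200
p = np.empty(T)
p[0] = 0.3
for t in range(1, T):
    p[t] = 0.1 + 0.7 * p[t - 1] + rng.normal(0.0, 0.05)

# Predictor: prevalence one month ago (the "historical data" variable).
X = np.column_stack([np.ones(T - 1), p[:-1]])
y = p[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ beta
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Adding further lags, or lagged prevalence in other host species, is just a matter of appending columns to `X`; the paper's finding was that duck and quail columns did not improve prediction for chickens.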

    Comparing families of dynamic causal models

    Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data
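    Both steps are short computations once posterior model probabilities are in hand: a family's probability is the sum of its members' probabilities, and within-family BMA reweights the surviving models. A sketch with hypothetical numbers (all model probabilities and parameter estimates below are assumed for illustration):

```python
import numpy as np

# Hypothetical posterior probabilities for four models.
model_prob = np.array([0.10, 0.25, 0.40, 0.25])
family = np.array([0, 0, 1, 1])   # e.g. family 0 = serial, family 1 = parallel

# Family-level inference: sum model probabilities within each family.
family_prob = np.array([model_prob[family == f].sum() for f in (0, 1)])

# Posterior mean of some connection parameter under each model (assumed).
theta_mean = np.array([0.2, 0.3, 0.8, 0.6])

# Bayesian model averaging within the winning family: renormalise the
# model probabilities inside that family and average the estimates.
f_best = int(np.argmax(family_prob))
in_f = family == f_best
w = model_prob[in_f] / model_prob[in_f].sum()
theta_bma = float(w @ theta_mean[in_f])
```

No single model in family 1 dominates here, which is exactly the situation where averaging within the family is more robust than committing to the single best model.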

    Combining estimates of interest in prognostic modelling studies after multiple imputation: current practice and guidelines

    Background: Multiple imputation (MI) provides an effective approach to handle missing covariate data within prognostic modelling studies, as it can properly account for the missing data uncertainty. The multiply imputed datasets are each analysed using standard prognostic modelling techniques to obtain the estimates of interest. The estimates from each imputed dataset are then combined into one overall estimate and variance, incorporating both the within- and between-imputation variability. Rubin's rules for combining these multiply imputed estimates are based on asymptotic theory. The resulting combined estimates may be more accurate if the posterior distribution of the population parameter of interest is better approximated by the normal distribution. However, the normality assumption may not be appropriate for all the parameters of interest when analysing prognostic modelling studies, such as predicted survival probabilities and model performance measures. Methods: Guidelines for combining the estimates of interest when analysing prognostic modelling studies are provided. A literature review is performed to identify current practice for combining such estimates in prognostic modelling studies. Results: Methods for combining all reported estimates after MI were not well reported in the current literature. Rubin's rules without applying any transformations were the standard approach used, when any method was stated. Conclusion: The proposed simple guidelines for combining estimates after MI may lead to a wider and more appropriate use of MI in future prognostic modelling studies.
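    Rubin's rules themselves are a short computation: average the per-imputation estimates, then combine the within-imputation variance W and the between-imputation variance B as T = W + (1 + 1/m)B. A sketch with hypothetical numbers (for bounded quantities such as survival probabilities, the point of the paper's guidelines is to apply these rules on a suitable transformed scale, e.g. log or logit, before back-transforming):

```python
import numpy as np

# Estimates and variances of one parameter from m = 5 imputed datasets
# (hypothetical numbers for illustration).
q = np.array([0.52, 0.48, 0.55, 0.50, 0.51])       # per-imputation estimates
u = np.array([0.010, 0.012, 0.011, 0.009, 0.010])  # per-imputation variances
m = q.size

q_bar = q.mean()              # combined point estimate
w = u.mean()                  # within-imputation variance
b = q.var(ddof=1)             # between-imputation variance
t = w + (1 + 1 / m) * b       # total variance (Rubin's rules)
se = np.sqrt(t)               # combined standard error
```

The `(1 + 1/m)` factor corrects for using a finite number of imputations; as m grows, the total variance approaches W + B.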

    Bayesian Mode Regression

    This article has been made available through the Brunel Open Access Publishing Fund. Like mean, quantile and variance, mode is also an important measure of central tendency of a distribution. Many practical questions, particularly in the analysis of big data, such as “Which element (gene or file or signal) is the most typical one among all elements in a network?”, are directly related to mode. Mode regression, which provides a convenient summary of how the regressors affect the conditional mode, is totally different from other models based on the conditional mean, conditional quantile or conditional variance. Some inference methods for mode regression exist, but none of them is from the Bayesian perspective. This paper introduces Bayesian mode regression by exploring three different approaches, including their theoretic properties. The proposed approaches are illustrated using simulated datasets and a real data set.
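    For intuition, the classical (non-Bayesian) kernel objective for mode regression can be sketched directly: maximise the sum of K((y_i - a - b*x_i)/h) over the coefficients, so the fitted line passes through the densest part of the conditional distribution. The example below is my illustration under that classical objective, not one of the paper's Bayesian approaches; skewed errors are used so that the conditional mode line differs visibly from the conditional mean line:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
x = rng.uniform(-1.0, 1.0, n)

# Exponential errors have their mode at 0 but mean 1, so the conditional
# MODE line is 1 + 2x while the conditional MEAN line is 2 + 2x.
y = 1.0 + 2.0 * x + rng.exponential(1.0, n)

def modal_objective(a, b, h=0.3):
    # Gaussian-kernel score: large when many residuals sit near zero.
    r = y - a - b * x
    return np.sum(np.exp(-0.5 * (r / h) ** 2))

# Crude grid search over intercept and slope (fine enough for a demo).
A = np.arange(0.0, 3.01, 0.05)
B = np.arange(1.0, 3.01, 0.05)
scores = np.array([[modal_objective(a, b) for b in B] for a in A])
ia, ib = np.unravel_index(scores.argmax(), scores.shape)
a_hat, b_hat = A[ia], B[ib]
```

Ordinary least squares on the same data would recover an intercept near 2; the modal objective instead tracks the most typical response, which is the quantity mode regression targets.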